Bayes Factor


A Normative Theory for Causal Inference and Bayes Factor Computation in Neural Circuits

Wenhao Zhang, Si Wu, Brent Doiron, Tai Sing Lee

Neural Information Processing Systems

This study provides a normative theory for how Bayesian causal inference can be implemented in neural circuits. In both cognitive processes such as causal reasoning and perceptual inference such as cue integration, the nervous system needs to choose among different models representing the underlying causal structures when making inferences about external stimuli. In multisensory processing, for example, the nervous system has to decide whether to integrate or segregate inputs from different sensory modalities when inferring the sensory stimuli, depending on whether the inputs arise from the same source or from different sources. Making this choice is a model selection problem that requires computing the Bayes factor, the ratio of likelihoods between the integration and segregation models. In this paper, we consider causal inference in multisensory processing and propose a novel generative model based on neural population codes that takes into account both stimulus feature and stimulus reliability in the inference. For circular variables such as heading direction, our normative theory yields an analytical solution for the Bayes factor, with a clear geometric interpretation, which can be implemented by simple additive mechanisms with neural population codes. Numerical simulations show that the tunings of the neurons computing the Bayes factor are consistent with the opposite neurons discovered in the dorsal medial superior temporal (MSTd) and ventral intraparietal (VIP) areas for visual-vestibular processing. This study illuminates a potential neural mechanism for causal inference in the brain.
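For intuition, the integration-vs-segregation Bayes factor described in the abstract has a well-known closed form when two circular cues follow von Mises likelihoods with a uniform stimulus prior. The sketch below (illustrative concentrations, not the paper's circuit model) computes it; the resultant concentration k12 is the length of the vector sum of two vectors of lengths k1 and k2 separated by the cue disparity, which is the geometric interpretation the abstract alludes to:

```python
import numpy as np
from scipy.special import i0  # modified Bessel function of the first kind, order 0

def log_bayes_factor(x1, x2, k1, k2):
    """Log Bayes factor: integration (common source) vs. segregation
    (independent sources), for two circular cues x1, x2 with von Mises
    likelihoods of concentration k1, k2 and uniform priors on the stimuli."""
    # Resultant concentration of the product of the two von Mises likelihoods:
    # geometrically, the length of the sum of two vectors of lengths k1 and k2
    # separated by the cue disparity (x1 - x2).
    k12 = np.sqrt(k1**2 + k2**2 + 2.0 * k1 * k2 * np.cos(x1 - x2))
    return np.log(i0(k12)) - np.log(i0(k1)) - np.log(i0(k2))

# Agreeing cues favor integration (BF > 1); conflicting cues favor segregation.
bf_same = np.exp(log_bayes_factor(0.0, 0.0, 2.0, 2.0))
bf_opp = np.exp(log_bayes_factor(0.0, np.pi, 2.0, 2.0))
```

Because the log Bayes factor is a sum of three terms, it is the kind of quantity an additive neural mechanism could in principle compute, consistent with the abstract's claim.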


Prediction Markets as Bayesian Inverse Problems: Uncertainty Quantification, Identifiability, and Information Gain from Price-Volume Histories under Latent Types

Madrigal-Cianci, Juan Pablo, Maya, Camilo Monsalve, Breakey, Lachlan

arXiv.org Machine Learning

Prediction markets are often described as mechanisms that ``aggregate information'' into prices, yet the mapping from dispersed private information to observed market histories is typically noisy, endogenous, and shaped by heterogeneous and strategic participation. This paper formulates prediction markets as Bayesian inverse problems in which the unknown event outcome \(Y\in\{0,1\}\) is inferred from an observed history of market-implied probabilities and traded volumes. We introduce a mechanism-agnostic observation model in log-odds space in which price increments conditional on volume arise from a latent mixture of trader types. The resulting likelihood class encompasses informed and uninformed trading, heavy-tailed microstructure noise, and adversarial or manipulative flow, while requiring only price and volume as observables. Within this framework we define posterior uncertainty quantification for \(Y\), provide identifiability and well-posedness criteria in terms of Kullback--Leibler separation between outcome-conditional increment laws, and derive posterior concentration statements and finite-sample error bounds under general regularity assumptions. We further study stability of posterior odds to perturbations of the observed price--volume path and define realized and expected information gain via the posterior-vs-prior KL divergence and mutual information. The inverse-problem formulation yields explicit diagnostics for regimes in which market histories are informative and stable versus regimes in which inference is ill-posed due to type-composition confounding or outcome--nuisance symmetries. Extensive experiments on synthetic data validate our theoretical predictions regarding posterior concentration rates and identifiability thresholds.
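As a rough illustration of the inverse-problem setup (a generic sketch, not the paper's actual likelihood class), the code below updates the posterior log-odds of the outcome \(Y\) from observed log-odds price increments under a simple two-type mixture: informed flow drifts toward the true outcome, uninformed flow is zero-mean noise whose scale shrinks with volume. All parameter names and values here are invented for the example:

```python
import math

def posterior_log_odds(increments, volumes, prior_log_odds=0.0,
                       p_informed=0.3, drift=0.05, sigma=0.2):
    """Posterior log-odds of Y=1 given log-odds price increments and volumes,
    under an assumed two-type Gaussian mixture (illustrative parameters)."""
    def normal_pdf(x, mu, s):
        return math.exp(-0.5 * ((x - mu) / s) ** 2) / (s * math.sqrt(2 * math.pi))

    log_odds = prior_log_odds
    for dx, v in zip(increments, volumes):
        s = sigma / math.sqrt(max(v, 1e-9))  # higher volume -> less noisy increment
        # Mixture likelihoods of the increment conditional on each outcome.
        lik1 = p_informed * normal_pdf(dx, +drift, s) + (1 - p_informed) * normal_pdf(dx, 0.0, s)
        lik0 = p_informed * normal_pdf(dx, -drift, s) + (1 - p_informed) * normal_pdf(dx, 0.0, s)
        log_odds += math.log(lik1 / lik0)  # Bayes update in log-odds space
    return log_odds
```

The identifiability issues the abstract raises show up directly in this toy model: if the outcome-conditional increment laws coincide (e.g. drift = 0, or p_informed = 0), every update term vanishes and the history carries no information about \(Y\).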






A Appendix

Neural Information Processing Systems

To quote Armitage [1993], "the classical theory of experimental design deals predominantly with experiments of predetermined size, presumably because the pioneers of the subject, particularly R. A. Fisher, worked in agricultural research, where the outcome of a field trial is not available until long after the experiment has been designed and started." Consider these problems first from the perspective of hypothesis testing. Perform the test too early, and the Type II error probability will be high, resulting in many small effects going undetected. For instance, consider one-sided "no-harm" testing applications, where the goal is to detect possibly small harmful effects. To address this concern, the experimenter might wish to estimate the effect by computing a confidence interval, but the experimenter cannot hope to "monitor" the effect of each arm by computing multiple fixed-n confidence intervals. Instead of stopping the experiment at a predetermined sample size, it is more natural and useful in sequential applications for the stopping rule to be data-dependent; that is, to perform optional continuation.
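The danger of naively monitoring a fixed-n test under data-dependent stopping can be seen in a small Monte Carlo sketch (a generic illustration, not the appendix's method): peeking at a two-sided z-test after every observation under the null and stopping at the first "significant" result inflates the Type I error far above the nominal 5%.

```python
import math
import random

def false_positive_rate(n_max=200, n_sims=2000, z_crit=1.96, seed=0):
    """Simulate i.i.d. N(0, 1) data under the null, recompute the naive
    fixed-n z statistic after every observation, and stop at the first
    crossing of the nominal 5% critical value. Returns the fraction of
    runs that ever 'reject' -- the inflated Type I error rate."""
    rng = random.Random(seed)
    rejections = 0
    for _ in range(n_sims):
        total = 0.0
        for n in range(1, n_max + 1):
            total += rng.gauss(0.0, 1.0)
            if abs(total / math.sqrt(n)) > z_crit:  # peek after each observation
                rejections += 1
                break
    return rejections / n_sims
```

With continuous monitoring up to a few hundred observations the realized rate is several times the nominal level, which is exactly why sequential applications need stopping rules that remain valid under optional continuation.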